
    Recognition of human periodic motion: a frequency domain approach

    We present a frequency-domain analysis technique for modelling and recognizing human periodic movements from moving light displays (MLDs). We model periodic motions by motion templates, which consist of a set of feature power vectors extracted from unidentified vertical-component trajectories of feature points. Motion recognition is carried out in the frequency domain by comparing an observed motion template with pre-stored templates. This method contrasts with common spatio-temporal approaches. The proposed method is demonstrated by examples of human periodic motion recognition in MLDs
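The frequency-domain template idea can be sketched as follows. The function names, the power-spectrum normalisation, and the choice of low-frequency bins are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def power_vector(trajectory, n_bins=8):
    """Power spectrum of one vertical point trajectory (illustrative)."""
    spectrum = np.abs(np.fft.rfft(trajectory - trajectory.mean())) ** 2
    v = spectrum[1:n_bins + 1]          # drop DC, keep low-frequency bins
    return v / (v.sum() + 1e-12)        # normalise so templates are comparable

def template_distance(template_a, template_b):
    """Compare two motion templates: one power vector per feature point."""
    return sum(np.linalg.norm(a - b) for a, b in zip(template_a, template_b))

# Two synthetic periodic motions at different frequencies
t = np.linspace(0, 4 * np.pi, 128)
walk = [np.sin(2 * t + p) for p in (0.0, 1.0)]   # slower oscillation
run = [np.sin(4 * t + p) for p in (0.0, 1.0)]    # faster oscillation

walk_tpl = [power_vector(s) for s in walk]
run_tpl = [power_vector(s) for s in run]
obs_tpl = [power_vector(np.sin(2 * t + 0.5)) for _ in range(2)]

# The observed motion matches the slower template despite the phase shift,
# because power spectra discard phase
assert template_distance(obs_tpl, walk_tpl) < template_distance(obs_tpl, run_tpl)
```

Because recognition compares power spectra rather than raw trajectories, the comparison is insensitive to the phase at which the observation starts.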

    Dynamic segment-based sparse feature-point matching in articulate motion

    We propose an algorithm for identifying articulated motion. The motion is represented by a sequence of 3D sparse feature-point data. The algorithm emphasizes a self-initializing identification phase for each uninterrupted data sequence, typically at the beginning or on resumption of tracking. We combine a dynamic segment-based hierarchical identification with an inter-frame tracking strategy for efficiency and robustness. We have tested the algorithm successfully using human motion data obtained from a marker-based optical motion capture (MoCap) system

    A tutorial on motion capture driven character animation

    Motion capture (MoCap) is an increasingly important technique for creating realistic human motion for animation. However, MoCap data are noisy, and the resulting animation is often inaccurate and unrealistic without elaborate manual processing of the data. In this paper, we discuss practical issues for MoCap-driven character animation, particularly when using commercial toolkits, and highlight open topics in this field for future research. MoCap animations created in this project will be demonstrated at the conference

    Tracking object poses in the context of robust body pose estimates

    This work focuses on tracking objects being used by humans. These objects are often small, fast moving and heavily occluded by the user. Attempting to recover their 3D position and orientation over time is a challenging research problem. To make progress we appeal to the fact that these objects are often used in a consistent way. The body poses of different people using the same object tend to have similarities, and, when considered relative to those body poses, so do the respective object poses. Our intuition is that, in the context of recent advances in body-pose tracking from RGB-D data, robust object-pose tracking during human-object interactions should also be possible. We propose a combined generative and discriminative tracking framework able to follow gradual changes in object-pose over time but also able to re-initialise object-pose upon recognising distinctive body-poses. The framework is able to predict object-pose relative to a set of independent coordinate systems, each one centred upon a different part of the body. We conduct a quantitative investigation into which body parts serve as the best predictors of object-pose over the course of different interactions. We find that while object-translation should be predicted from nearby body parts, object-rotation can be more robustly predicted by using a much wider range of body parts. Our main contribution is to provide the first object-tracking system able to estimate 3D translation and orientation from RGB-D observations of human-object interactions. By tracking precise changes in object-pose, our method opens up the possibility of more detailed computational reasoning about human-object interactions and their outcomes. For example, an assistive living system could go beyond just recognising the actions and objects involved in everyday tasks such as sweeping or drinking, to reasoning that a person has missed sweeping under the chair or has not drunk enough water today. © 2014 Elsevier B.V. All rights reserved
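The core transform behind body-part-relative prediction can be sketched as a change of coordinate frame. This is a minimal sketch assuming rotation-matrix poses; the function names and toy values are hypothetical, not the paper's implementation:

```python
import numpy as np

def to_relative(obj_R, obj_t, part_R, part_t):
    """Express a world-frame object pose (R, t) in a body-part frame."""
    rel_R = part_R.T @ obj_R
    rel_t = part_R.T @ (obj_t - part_t)
    return rel_R, rel_t

def to_world(rel_R, rel_t, part_R, part_t):
    """Predict the world-frame object pose from a stored relative pose."""
    return part_R @ rel_R, part_R @ rel_t + part_t

# Toy example: a hand frame rotated 90 degrees about z, object 0.1 m away
part_R = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
part_t = np.array([0.5, 0.2, 1.0])
obj_R, obj_t = np.eye(3), part_t + np.array([0.1, 0.0, 0.0])

# Storing the relative pose and mapping back recovers the world pose exactly
rel_R, rel_t = to_relative(obj_R, obj_t, part_R, part_t)
back_R, back_t = to_world(rel_R, rel_t, part_R, part_t)
assert np.allclose(back_R, obj_R) and np.allclose(back_t, obj_t)
```

Once body-pose tracking supplies `part_R` and `part_t` per frame, a stored relative pose yields a world-frame object-pose prediction for each candidate body part.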

    Human activity tracking from moving camera stereo data

    We present a method for tracking human activity using observations from a moving narrow-baseline stereo camera. Range data are computed from the disparity between stereo image pairs. We propose a novel technique for calculating weighting scores from range data given body-configuration hypotheses. We use a modified Annealed Particle Filter to recover the optimal tracking candidate from a low-dimensional latent space computed from motion capture data and constrained by an activity model. We evaluate the method on synthetic data and on a walking sequence recorded using a moving hand-held stereo camera
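One way to picture the range-based weighting step is the sketch below, assuming a simple Gaussian score over depth residuals. The paper's actual weighting function is more elaborate; the names and numbers here are illustrative:

```python
import numpy as np

def range_weight(hypothesis_depths, observed_depths, sigma=0.05):
    """Score a body-configuration hypothesis against stereo range data.

    A hypothesis whose predicted surface depths agree with the observed
    range data (within noise sigma, in metres) receives a weight near 1.
    """
    residual = hypothesis_depths - observed_depths
    return np.exp(-0.5 * np.mean(residual ** 2) / sigma ** 2)

observed = np.array([2.00, 2.05, 2.10])   # metres, from stereo disparity
good = np.array([2.01, 2.04, 2.12])       # hypothesis close to the data
bad = np.array([2.50, 2.60, 2.40])        # hypothesis far from the data

# The close hypothesis dominates the particle weighting
assert range_weight(good, observed) > range_weight(bad, observed)
```

Within a particle filter, such weights determine which body-configuration hypotheses survive resampling.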

    Tracking a walking person using activity-guided annealed particle filtering

    Tracking human pose using observations from fewer than three cameras is a challenging task due to ambiguity in the available image evidence. This work presents a method for tracking using a pre-trained model of activity to guide sampling within an Annealed Particle Filtering framework. The approach is an example of model-based analysis-by-synthesis and is capable of robust tracking from fewer than three cameras with reduced numbers of samples. We test the scheme on a common dataset containing ground-truth motion capture data and compare against quantitative results for standard Annealed Particle Filtering. We find lower absolute and relative error scores for both monocular and 2-camera sequences using 80% fewer particles. © 2008 IEEE
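The annealing idea can be illustrated with a toy one-dimensional sketch. The likelihood, annealing schedule, and diffusion noise below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(pose, observation):
    """Illustrative image-evidence score: peaks when pose matches observation."""
    return np.exp(-0.5 * (pose - observation) ** 2 / 0.05)

def annealed_filter(observation, n_particles=200, layers=(0.2, 0.5, 1.0)):
    """Annealed particle filtering in one dimension (a toy sketch).

    Early layers raise the weighting function to a small power (beta), which
    flattens it so particles survive broad exploration; later layers sharpen
    it so survivors converge on the optimum.
    """
    particles = rng.uniform(-2.0, 2.0, n_particles)   # broad initial spread
    for beta in layers:                               # annealing schedule
        weights = likelihood(particles, observation) ** beta
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)   # resample
        # Diffuse survivors, less aggressively in later (sharper) layers
        particles = particles[idx] + rng.normal(
            0, 0.1 * (1 - beta) + 0.02, n_particles)
    return particles.mean()

estimate = annealed_filter(observation=0.7)
assert abs(estimate - 0.7) < 0.2
```

In the activity-guided variant described above, the initial spread and the diffusion would come from a pre-trained activity model rather than a uniform prior.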

    Movement and gesture recognition using deep learning and wearable-sensor technology

    Pattern recognition of time-series signals for movement and gesture analysis plays an important role in fields as diverse as healthcare, astronomy, industry and entertainment. Deep Learning (DL) has made tremendous progress in computer vision and Natural Language Processing (NLP) in recent years, but its performance on movement and gesture recognition from noisy multi-channel sensor signals remains largely unexplored. To tackle this problem, this study classifies diverse movements and gestures using four DL models: a 1-D Convolutional Neural Network (1-D CNN), a Recurrent Neural Network with Long Short-Term Memory (LSTM), a basic hybrid model containing one convolutional layer and one recurrent layer (C-RNN), and an advanced hybrid model containing three convolutional layers and three recurrent layers (3+3 C-RNN). The models were applied to three different databases (DBs) and their performances were compared. DB1 is the HCL dataset, which includes 6 human daily activities of 30 subjects based on accelerometer and gyroscope signals. DB2 and DB3 are both based on the surface electromyography (sEMG) signal for 17 diverse movements. The improvements and limitations of the models are evaluated and discussed according to the results
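The low-level operation inside a 1-D CNN can be sketched in plain NumPy. The hand-set difference kernel below is an illustrative stand-in for a learned filter; real models learn many such kernels from data:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation, the core CNN layer operation."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

# A difference kernel responds to rapid signal changes, a typical
# low-level feature for motion signals (hand-set here, learned in practice)
kernel = np.array([-1.0, 0.0, 1.0])

t = np.linspace(0, 2 * np.pi, 100)
slow = np.sin(t)        # e.g. a gentle movement channel
fast = np.sin(8 * t)    # e.g. a vigorous movement channel

# Global-max pooling over the feature map gives one scalar per channel
feat_slow = relu(conv1d(slow, kernel)).max()
feat_fast = relu(conv1d(fast, kernel)).max()
assert feat_fast > feat_slow   # faster motion yields a stronger response
```

Stacking such convolutions (and, in the hybrid models, feeding their feature maps into recurrent layers) is what lets these networks classify raw multi-channel sensor windows.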

    Behaviour based particle filtering for human articulated motion tracking

    This paper presents an approach to human motion tracking using multiple pre-trained activity models for propagation of particles in Annealed Particle Filtering. Hidden Markov models are trained on dimensionally reduced joint angle data to produce models of activity. Particles are divided between models for propagation by HMM synthesis, before converging on a solution during the annealing process. The approach facilitates multi-view tracking of unknown subjects performing multiple known activities with low particle numbers
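The division of particles between activity models can be pictured with a toy discrete sketch. The two transition matrices below are illustrative stand-ins for HMMs trained on dimensionally reduced joint-angle data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy activity models as transition matrices over 3 pose states
walk_T = np.array([[0.1, 0.8, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.8, 0.1, 0.1]])    # cycles through states 0 -> 1 -> 2
idle_T = np.array([[0.90, 0.05, 0.05],
                   [0.05, 0.90, 0.05],
                   [0.05, 0.05, 0.90]]) # tends to stay put

def propagate(states, T):
    """Sample each particle's next pose state from its activity model."""
    return np.array([rng.choice(3, p=T[s]) for s in states])

# Divide the particle set between the two activity models, then propagate
particles = np.zeros(100, dtype=int)    # all particles start in state 0
walk_next = propagate(particles[:50], walk_T)
idle_next = propagate(particles[50:], idle_T)

# The walking model mostly advances its particles; the idle model does not
assert (walk_next == 1).sum() > (idle_next == 1).sum()
```

After propagation, the annealing layers weight and resample across both populations, so particles drawn from the activity model that best explains the images come to dominate.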

    Effectiveness of surface electromyography in pattern classification for upper limb amputees

    This study was undertaken to explore 18 time-domain (TD) and time-frequency-domain (TFD) feature configurations to determine the most discriminative feature sets for classification. Features were extracted from the surface electromyography (sEMG) signal of 17 hand and wrist movements and used to perform a series of classification trials with the random forest classifier. Movement datasets for 11 intact subjects and 9 amputees from the NinaPro online database repository were used. The aim was to identify any optimum configurations that combined features from both domains, and whether there was consistency across subject types for any standout features. This work built on our previous research to incorporate the TFD, using a Discrete Wavelet Transform with a Daubechies wavelet. Findings show that configurations combining the same features from both domains perform best across subject types (TD: root mean square (RMS), waveform length, and slope sign changes; TFD: RMS, standard deviation, and energy). These mixed-domain configurations can yield optimal performance (intact subjects: 90.98%; amputee subjects: 75.16%), but offer only limited improvement over single-domain configurations. This suggests there is limited scope in attempting to build a single absolute feature configuration, and more focus should be put on enhancing the classification methodology for adaptivity and robustness under actual operating conditions
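The three shared time-domain features named above can be computed in a few lines of NumPy. The synthetic window and the slope-sign-change threshold are illustrative assumptions:

```python
import numpy as np

def td_features(window, ssc_threshold=1e-4):
    """Three standard time-domain sEMG features (illustrative code)."""
    rms = np.sqrt(np.mean(window ** 2))       # root mean square
    wl = np.sum(np.abs(np.diff(window)))      # waveform length
    d = np.diff(window)
    # Slope sign changes: consecutive differences flip sign, with at least
    # one of them exceeding a small noise threshold
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 ((np.abs(d[:-1]) > ssc_threshold) |
                  (np.abs(d[1:]) > ssc_threshold)))
    return rms, wl, ssc

# Synthetic 200-sample window: a noisy oscillation standing in for sEMG
rng = np.random.default_rng(42)
window = (0.5 * np.sin(np.linspace(0, 20 * np.pi, 200))
          + 0.05 * rng.normal(size=200))

rms, wl, ssc = td_features(window)
assert rms > 0 and wl > 0 and ssc > 0
```

In a full pipeline, such features would be computed per sliding window and per channel, then concatenated (alongside the wavelet-domain features) into the vectors fed to the random forest.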